Conversation
# Conflicts:
#	modules/ROOT/nav.adoc
Added 7 new documentation files for AI Gateway:

- what-is-ai-gateway.adoc: Overview, problem/solution framing, common patterns
- quickstart-enhanced.adoc: Step-by-step quickstart with time markers
- observability-logs.adoc: Request logs, filtering, and debugging
- observability-metrics.adoc: Dashboards, analytics, and cost tracking
- migration-guide.adoc: Safe migration from direct provider integration
- cel-routing-cookbook.adoc: CEL routing patterns with examples
- mcp-aggregation-guide.adoc: MCP aggregation and orchestration

All files follow Redpanda documentation standards:

- Sentence case headings
- Imperative verbs for action headings
- AsciiDoc format
- Comprehensive placeholders for product-specific details

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add personas, learning objectives, and prerequisites to all AI Gateway documentation pages. Remove DRAFT prefixes from titles and time estimates from quickstart. Fix passive voice in multiple locations.

Changes:

- Add page-personas attributes to all 7 files
- Add learning objectives in ABCD format
- Add prerequisites sections where missing
- Remove "DRAFT:" from all page titles
- Remove time estimates from quickstart-enhanced.adoc
- Fix passive voice constructions
- Improve page descriptions
- Preserve all placeholder comments for future content

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Provide admin and user guides for configuring Claude Code, Cline, Continue.dev, Cursor IDE, and GitHub Copilot to work with AI Gateway, enabling centralized LLM routing and MCP tool aggregation.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Fix period usage in numbered steps and bold labels across all AI agent docs
- Add descriptive link text to xref anchor links for better accessibility
- Redesign monitor-agents.adoc, reducing lists from 9 to 2 and converting them to prose and tables
- Create observability index page
- Fix broken xref from billing.adoc to observability/concepts.adoc
- Change negative headings to positive action-oriented headings
- Add explicit 'Enter this query' instructions for each test scenario
- Add guidance on what to watch for in the conversation panel
- Specify when to start new sessions for context clearing
Moved monitoring how-tos into context where users need them:

- Monitor Agents now in agents section
- Monitor MCP Servers now in mcp/remote section
- Observability concepts remain centralized as single source of truth

This follows the "procedures in context, concepts centralized" pattern, reducing navigation overhead and improving task completion. Also removed the unnecessary observability index page, since only one page remains in that section.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Sessions and tasks topics are being soft-hidden with a __ prefix because they are internal implementation details. Updated documentation to focus on user-facing monitoring features (transcripts and inspector).

Changes:

- Remove "Agent data topics" section from concepts.adoc, including schemas
- Remove "Consume agent data topics" section from monitor-agents.adoc
- Update troubleshooting.adoc to reference transcripts instead of topics
- Update learning objective to "Track token usage and performance metrics"
- Fix xref anchor links to include descriptive text
- Fix shipping carrier name to comply with Google style guide

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
…oud-team-polish-clean-up

# Conflicts:
#	modules/ROOT/nav.adoc
#	modules/ai-agents/pages/mcp/index.adoc
Actionable comments posted: 5
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
modules/ai-agents/examples/mcp-tools/processors/order_workflow.yaml (3)
104-107: ⚠️ Potential issue | 🟡 Minor

`customer_tier` description omits the `"premium"` output value.

The `description` says `"optional: vip, standard"`, but the `route_by_priority` processor emits `"processing_tier": "premium"` as a third tier. Readers inferring possible tier values from the meta schema will miss this case.

📝 Proposed fix

```diff
-        description: "Customer tier (optional: vip, standard)"
+        description: "Customer tier (optional: vip, standard). Orders over $1000 are auto-elevated to premium tier."
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/examples/mcp-tools/processors/order_workflow.yaml` around lines 104 - 107, Update the metadata for the schema field named customer_tier so its description lists all possible tier values — include "premium" alongside "vip" and "standard" — because the route_by_priority processor emits a processing_tier value of "premium"; revise the description text for customer_tier to reflect these three explicit options.
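The gap this comment flags, an emitted value missing from the documented enum, can be checked mechanically. A minimal Python sketch, with the tier names taken from the comment and everything else illustrative:

```python
# The documented enum for customer_tier versus what the routing logic can
# actually emit: "premium" comes from route_by_priority but is undocumented.
documented_tiers = {"vip", "standard"}
emitted_tiers = {"vip", "standard", "premium"}

undocumented = sorted(emitted_tiers - documented_tiers)
print(undocumented)  # ['premium']
```

A check like this can live in an example-linting script so schema descriptions and routing logic never drift apart silently.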
33-57: ⚠️ Potential issue | 🟡 Minor

Switch branch ordering makes the VIP path unreachable for high-value VIP customers.

The `total > 1000` check is evaluated first. A VIP customer whose order exceeds $1,000 will always be routed to the premium branch and will never receive VIP perks (`expedited_shipping`, `white_glove_service`). If VIP tier should supersede order value, swap the order of the first two cases.

🔀 Proposed fix — evaluate VIP tier before order value

```diff
 switch:
+  - check: 'this.customer_tier == "vip"'
+    processors:
+      - label: mock_vip_processing
+        mutation: |
+          # Mock VIP processing
+          root = this.merge({
+            "processing_tier": "vip",
+            "processing_time_estimate": "1-2 hours",
+            "assigned_rep": "vip-team@example.com",
+            "priority_score": 90,
+            "perks": ["expedited_shipping", "white_glove_service"]
+          })
   - check: 'this.total > 1000'
     processors:
       - label: mock_high_value_processing
         mutation: |
           # Mock premium processing
           root = this.merge({
             "processing_tier": "premium",
             "processing_time_estimate": "2-4 hours",
             "assigned_rep": "premium-team@example.com",
             "priority_score": 95
           })
-  - check: 'this.customer_tier == "vip"'
-    processors:
-      - label: mock_vip_processing
-        mutation: |
-          # Mock VIP processing
-          root = this.merge({
-            "processing_tier": "vip",
-            "processing_time_estimate": "1-2 hours",
-            "assigned_rep": "vip-team@example.com",
-            "priority_score": 90,
-            "perks": ["expedited_shipping", "white_glove_service"]
-          })
   - processors:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/examples/mcp-tools/processors/order_workflow.yaml` around lines 33 - 57, The switch currently evaluates 'this.total > 1000' before 'this.customer_tier == "vip"', making the mock_vip_processing branch unreachable for high-value VIP orders; reorder the cases so the check 'this.customer_tier == "vip"' (label mock_vip_processing) appears before 'this.total > 1000' (label mock_high_value_processing) to ensure VIP logic (perks, priority_score, etc.) is applied first, preserving the existing mutations in both processors.
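The underlying semantics here are first-match-wins, as in a Connect `switch` processor. A small Python analogue (the `route` helper and the order dict are hypothetical, not Connect code) shows why a $1,500 VIP order takes the premium branch until the cases are swapped:

```python
def route(order, cases):
    """Return the tier of the first case whose check matches (first-match-wins)."""
    for check, tier in cases:
        if check(order):
            return tier
    return "standard"

# Case order as currently written: order value is tested before VIP status.
wrong = [
    (lambda o: o["total"] > 1000, "premium"),
    (lambda o: o["customer_tier"] == "vip", "vip"),
]
# Case order after the proposed fix: VIP status supersedes order value.
fixed = [
    (lambda o: o["customer_tier"] == "vip", "vip"),
    (lambda o: o["total"] > 1000, "premium"),
]

big_vip = {"total": 1500, "customer_tier": "vip"}
print(route(big_vip, wrong))  # premium -- VIP perks never applied
print(route(big_vip, fixed))  # vip
```

Whenever two cases can both match the same message, the more specific (or higher-priority) check has to come first.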
73-80: ⚠️ Potential issue | 🟡 Minor

`estimated_fulfillment` is a hardcoded placeholder despite the comment claiming it is calculated.

Line 74 correctly parses `$max_hours` from `processing_time_estimate`, but it is then only stored as `processing_time_hours` (line 80). The `estimated_fulfillment` field (line 79) is never derived from `$max_hours`, contradicting the comment on line 73. As example code that readers will reference, the misleading comment and unused variable could cause confusion.

🛠️ Proposed fix — compute `estimated_fulfillment` from `$max_hours`

```diff
-            "estimated_fulfillment": "TBD - calculated based on processing tier",
+            "estimated_fulfillment": "Within " + $max_hours.string() + " hours",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/examples/mcp-tools/processors/order_workflow.yaml` around lines 73 - 80, The code parses max_hours from processing_time_estimate into the variable max_hours but never uses it to set estimated_fulfillment; update the logic around the this.merge call (where estimated_fulfillment and processing_time_hours are set) to compute estimated_fulfillment from max_hours (e.g., add max_hours to now() and format the timestamp) and assign that computed timestamp string to "estimated_fulfillment" while keeping "processing_time_hours": $max_hours so the placeholder is replaced by the calculated fulfillment datetime; reference symbols: processing_time_estimate, max_hours, estimated_fulfillment, this.merge, root.

modules/ai-agents/pages/mcp/remote/quickstart.adoc (1)
64-69: ⚠️ Potential issue | 🟡 Minor

`--operation all` grants more than "produce and consume."

The step text says "produce and consume" but `--operation all` grants every Kafka operation on the topic (including describe, alter, delete, etc.). The matching Data Plane API call at line 169 uses `OPERATION_ALL` for the same reason.

For a quickstart that serves as a copy-paste reference, scoping this to only the required operations (`write` + `read`) is both more accurate and teaches the principle of least privilege.

🔒 Proposed fix (rpk tab)

```diff
-rpk acl create --allow-principal User:mcp --operation all --topic events
+rpk acl create --allow-principal User:mcp --operation write --topic events
+rpk acl create --allow-principal User:mcp --operation read --topic events
```

🔒 Proposed fix (Data Plane API tab, line 169)

```diff
-  "operation": "OPERATION_ALL",
+  "operation": "OPERATION_WRITE",
```

Repeat the call for `OPERATION_READ`, or batch using a list if the API supports it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/pages/mcp/remote/quickstart.adoc` around lines 64 - 69, The rpk ACL command currently uses --operation all which grants excessive permissions; change the rpk acl create invocation that targets User:mcp and topic events to grant only the required operations (read and write) instead of all, i.e., replace --operation all with the appropriate flags for read and write operations; likewise update the Data Plane API usage that currently passes OPERATION_ALL (around the call that configures ACLs for the mcp user) to use OPERATION_READ and OPERATION_WRITE (or a list/batch of those two operations) so the ACLs follow least privilege.

modules/ai-agents/pages/mcp/overview.adoc (1)
29-33: ⚠️ Potential issue | 🟠 Major

Missing `::` on the definition list term breaks AsciiDoc rendering.

Line 34 has `Remote MCP::` (correct dlist syntax), but line 29 reads `Redpanda Cloud Management MCP Server` without the required `::` suffix. AsciiDoc will render this as a plain paragraph rather than a definition list term, and the `+` continuation on line 31 will not attach correctly to any list item.

📝 Proposed fix

```diff
-Redpanda Cloud Management MCP Server
-A pre-built server that gives AI agents access to Redpanda Cloud APIs. It runs on your computer and lets you manage clusters, topics, and other resources through natural language.
+Redpanda Cloud Management MCP Server::
+A pre-built server that gives AI agents access to Redpanda Cloud APIs. It runs on your computer and lets you manage clusters, topics, and other resources through natural language.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/pages/mcp/overview.adoc` around lines 29 - 33, Add the missing AsciiDoc definition list marker by changing the heading term "Redpanda Cloud Management MCP Server" to "Redpanda Cloud Management MCP Server::" so it becomes a proper dlist term and the subsequent '+' continuation and example line attach correctly; verify the existing "Remote MCP::" remains unchanged and that the example line ("Example: ...") is part of the same list item.
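For reference, the shape the fix restores: in AsciiDoc, a definition-list term ends with `::`, and a `+` line attaches a continuation block to that item. A minimal sketch (descriptions and the example line are abbreviated with `...` rather than invented):

```asciidoc
Redpanda Cloud Management MCP Server::
A pre-built server that gives AI agents access to Redpanda Cloud APIs. It runs on your computer and lets you manage clusters, topics, and other resources through natural language.
+
Example: ...

Remote MCP::
...
```

Without the trailing `::`, the first term degrades to a plain paragraph and the `+` continuation has no list item to attach to, which is exactly the rendering break described above.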
🟠 Major comments (19)
modules/ai-agents/examples/mcp-tools/processors/enrich_order.yaml-7-9 (1)
7-9: ⚠️ Potential issue | 🟠 Major

Static URL produces identical responses for every message — add Bloblang interpolation to identify the target record.

`url: "https://api.example.com/lookup"` carries no order or customer identifier, so every message through this processor will hit the same URL and receive the same generic response. The `url` field supports Bloblang interpolation using `${! ... }` syntax, so the order or customer identifier present in the incoming message should be embedded in the URL (or passed as a query parameter). Without this, the processor cannot fulfill its stated purpose of enriching a specific order.

🐛 Proposed fix — inject the order identifier via Bloblang interpolation

```diff
-    url: "https://api.example.com/lookup"
+    url: "https://api.example.com/lookup?order_id=${! this.order_id }"
```

Adjust the field name (`order_id`) to match whatever identifier field is present in the incoming message payload.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/examples/mcp-tools/processors/enrich_order.yaml` around lines 7 - 9, The http processor's static url field is causing identical responses for every message; update the http -> url value to use Bloblang interpolation (the ${! ... } syntax) to inject the order/customer identifier from the incoming message (e.g., order_id or customer.id) into the path or as a query parameter so each request targets the specific record; locate the http processor's url entry in the enrich_order.yaml and replace the static string with a Bloblang expression that references the correct identifier field used by your pipeline.

modules/ai-agents/examples/mcp-tools/processors/openai_chat.yaml-15-15 (1)
15-15: ⚠️ Potential issue | 🟠 Major

Missing quotes in `json()` interpolation will cause a runtime failure.

In Redpanda Connect config string interpolation (`${! ... }`), the `json()` function requires a quoted string argument to reference a message field. `json(feedback_text)` passes an unquoted identifier, which cannot be resolved. The correct syntax is `json("feedback_text")`, as shown in the official documentation.

🐛 Proposed fix

```diff
-        Feedback: ${! json(feedback_text) }
+        Feedback: ${! json("feedback_text") }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/examples/mcp-tools/processors/openai_chat.yaml` at line 15, The interpolation uses json(feedback_text) which is unquoted and will fail at runtime; update the Redpanda Connect interpolation in openai_chat.yaml to call json with a quoted field name (use json("feedback_text")) so the connector can resolve the message field correctly and avoid the runtime error.

modules/ai-agents/examples/mcp-tools/inputs/event_driven_workflow.yaml-25-25 (1)
25-25: ⚠️ Potential issue | 🟠 Major

JSON injection risk: use Bloblang object construction instead of string interpolation for HTTP body.

`${! this.order_id }` is spliced directly into a JSON string literal. If `order_id` contains `"`, `\`, or `, "injected": "value"`, the resulting body is either malformed JSON or an injection payload. `json("items")` has the same concern if items values contain special characters.

Use a `mutation:` processor to build the payload as a native Bloblang object and then serialize it, rather than constructing raw JSON via string interpolation.

🔧 Proposed fix — build body with a mutation processor

```diff
-      - check: this.event_type == "order_created"
-        processors:
-          - http:
-              url: "${secrets.INVENTORY_API}/reserve"
-              verb: POST
-              headers:
-                Content-Type: application/json
-              body: '{"order_id": "${! this.order_id }", "items": ${! json("items") }}'
+      - check: this.event_type == "order_created"
+        processors:
+          - mutation: |
+              root = {
+                "order_id": this.order_id,
+                "items": this.items
+              }
+          - http:
+              url: "${secrets.INVENTORY_API}/reserve"
+              verb: POST
+              headers:
+                Content-Type: application/json
+              body: '${! content() }'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/examples/mcp-tools/inputs/event_driven_workflow.yaml` at line 25, The HTTP request body currently uses string interpolation ("body: '{\"order_id\": \"${! this.order_id }\", \"items\": ${! json(\"items\") }}'") which allows JSON injection; replace this with a mutation processor that builds a native Bloblang object (e.g., set root to {"order_id": this.order_id, "items": this.items} via a mutation/bloblang mapping) and then serialize that object for the HTTP body (use the serializer/json or a json() function) — locate the existing "body: '...'" line and the HTTP request processor and replace the interpolated string with the mutation-built object + JSON serialization.

modules/ai-agents/examples/mcp-tools/inputs/event_driven_workflow.yaml-16-34 (1)
16-34: ⚠️ Potential issue | 🟠 Major

`processors` is nested inside `redpanda:` — it should be a sibling of it.

In Redpanda Connect, input-level processors are specified as a sibling of the input component key (e.g., `redpanda:`, `kafka:`), not as a child field of the plugin configuration. Here, `processors:` is indented at the same level as `seed_brokers`, `topics`, and `tls`, making it a field of the `redpanda` plugin config. Redpanda Connect does not recognize `processors` as a valid field for the redpanda input, so the switch routing logic will not execute.

🔧 Proposed fix — move `processors` to be a sibling of `redpanda:`

```diff
 label: order-workflow
 # tag::component[]
 redpanda:
   seed_brokers: [ "${REDPANDA_BROKERS}" ]
   topics: [ "order-events" ]
   consumer_group: "workflow-orchestrator"
   tls:
     enabled: true
   sasl:
     - mechanism: "${REDPANDA_SASL_MECHANISM}"
       username: "${REDPANDA_SASL_USERNAME}"
       password: "${REDPANDA_SASL_PASSWORD}"
-  processors:
+
+processors:
   - switch:
       - check: this.event_type == "order_created"
         processors:
           - http:
               url: "${secrets.INVENTORY_API}/reserve"
               verb: POST
               headers:
                 Content-Type: application/json
               body: '{"order_id": "${! this.order_id }", "items": ${! json("items") }}'
       - check: this.event_type == "payment_confirmed"
         processors:
           - http:
               url: "${secrets.FULFILLMENT_API}/ship"
               verb: POST
               headers:
                 Content-Type: application/json
               body: '{"order_id": "${! this.order_id }"}'
 # end::component[]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/examples/mcp-tools/inputs/event_driven_workflow.yaml` around lines 16 - 34, The processors block is currently nested under the redpanda input and therefore ignored; move the top-level processors: mapping so it is a sibling of the redpanda: key (not indented under it). Specifically relocate the entire processors: -> - switch: ... section out of the redpanda plugin block so the switch processor (and its child http processors that call the INVENTORY_API and FULFILLMENT_API with bodies referencing this.order_id and json("items")) are at the same level as redpanda: in the YAML, ensuring Redpanda Connect will recognize and execute the input-level processors.

modules/ai-agents/partials/integrations/continue-user.adoc-157-163 (1)
157-163: ⚠️ Potential issue | 🟠 Major

`claude-haiku` without a version number is not a valid model identifier.

The correct Anthropic API identifier for the Haiku model is `claude-haiku-4-5`. Using bare `claude-haiku` will fail when the request reaches the Anthropic API (or the gateway's model lookup). This unversioned identifier appears at lines 160, 278, 567, 667, and 750.

Additionally, the file uses dot notation for other Claude models (`claude-sonnet-4.5`, `claude-opus-4.6`) while Anthropic's official API identifiers use hyphen notation (`claude-sonnet-4-5`, `claude-opus-4-6`). If these are Redpanda AI Gateway–specific aliases, that should be made explicit in the documentation to prevent users from attempting to use the same model names outside the gateway and encountering errors.

✏️ Suggested fix

```diff
   "tabAutocompleteModel": {
     "title": "Gateway - Claude Haiku (autocomplete)",
     "provider": "anthropic",
-    "model": "claude-haiku",
+    "model": "claude-haiku-4-5",
     "apiKey": "YOUR_REDPANDA_API_KEY",
     "apiBase": "<your-gateway-endpoint>"
   }
```

Apply the same change to lines 278, 567, 667, and 750.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/partials/integrations/continue-user.adoc` around lines 157 - 163, Replace unversioned and dot-notated Anthropic model identifiers with their official hyphenated, versioned names: change "claude-haiku" to "claude-haiku-4-5", change "claude-sonnet-4.5" to "claude-sonnet-4-5", and change "claude-opus-4.6" to "claude-opus-4-6" wherever these strings appear (e.g., the "tabAutocompleteModel" block and the other model entries referenced in the diff); if any of those shorter or dot-form names are Redpanda AI Gateway-specific aliases, add a short note in the same doc block clarifying they are gateway aliases and not valid for the official Anthropic API.

modules/ai-agents/pages/observability/ingest-custom-traces.adoc-88-91 (1)
88-91: ⚠️ Potential issue | 🟠 Major

Incorrect OTLP status code values — 0 is `UNSET`, not `OK`.

Lines 90 and 541 state `0 = OK` / `0 for success`, which contradicts the OTLP proto specification. The correct status code enum is: `STATUS_CODE_UNSET = 0`, `STATUS_CODE_OK = 1`, `STATUS_CODE_ERROR = 2`.

Users setting `code = 0` expecting success will instead produce spans with UNSET status, which does not signal success — it means the instrumentation did not explicitly set a status.

Fix locations

Line 90:

```diff
-| Operation status with code (0 = OK, 2 = ERROR)
+| Operation status with code (0 = UNSET, 1 = OK, 2 = ERROR)
```

Line 541:

```diff
-* `status` with a `code` field (0 for success, 2 for error)
+* `status` with a `code` field (1 for success/OK, 2 for error; 0 means unset)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/pages/observability/ingest-custom-traces.adoc` around lines 88 - 91, Update the documentation for the `status` field to use the correct OTLP enum mappings: change any instance that says "0 = OK" or "0 for success" to "0 = STATUS_CODE_UNSET" and "1 = STATUS_CODE_OK", and ensure errors are shown as "2 = STATUS_CODE_ERROR"; verify and update both occurrences referencing the `status` field so the docs reflect that 1 is success (OK) and 0 means UNSET.

modules/ai-agents/partials/integrations/cline-user.adoc-313-319 (1)
313-319: ⚠️ Potential issue | 🟠 Major

CEL expression uses wrong request path — `request.messages` doesn't exist in the schema.

The CEL routing example uses `request.messages[0].content`, but the gateway's documented request object schema (defined in `cel-routing-cookbook.adoc`) exposes messages under `request.body.messages`. Using `request.messages` will cause a CEL evaluation error or silent mis-route when users apply this pattern.

🛠 Proposed fix

```diff
 // Route simple edits to cost-effective model
-request.messages[0].content.contains("fix typo") ||
-request.messages[0].content.contains("rename") ?
+request.body.messages.size() > 0 && (
+  request.body.messages[0].content.contains("fix typo") ||
+  request.body.messages[0].content.contains("rename")) ?
   "anthropic/claude-haiku" :
   "anthropic/claude-sonnet-4.5"
```

(The `size() > 0` guard follows the safe-access pattern required for array fields, per `cel-routing-cookbook.adoc`.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/partials/integrations/cline-user.adoc` around lines 313 - 319, The CEL example uses the wrong request path (request.messages) which doesn't exist; update the expression to reference request.body.messages and add a safe-access guard: replace occurrences of request.messages[0].content with request.body.messages.size() > 0 && request.body.messages[0].content.contains(...) so the routing expression (used where the ternary picks "anthropic/claude-haiku" vs "anthropic/claude-sonnet-4.5") evaluates correctly without CEL errors.

modules/ai-agents/pages/mcp/remote/monitor-mcp-servers.adoc-71-96 (1)
71-96: ⚠️ Potential issue | 🟠 Major

Add `--use-schema-registry=value` flag to both `rpk topic consume` commands and fix inconsistent jq paths.

The `redpanda.otel_traces` topic uses Protobuf Schema Registry format, which requires explicit deserialization. Without the `--use-schema-registry=value` flag, `rpk topic consume` outputs raw Protobuf bytes, and piping to `jq` will fail.

Additionally, the two commands use inconsistent jq selectors:

- Line 74 correctly accesses the message via `.value | select(...)` (where `.value` is the rpk output wrapper)
- Line 95 incorrectly accesses `select(.status.code == 2)` at the top level

Both commands should use `.value | select(...)` to access fields within the deserialized message:

Corrected commands

[,bash]
----
rpk topic consume redpanda.otel_traces --use-schema-registry=value --offset start \
  | jq '.value | select(.instrumentationScope.name == "rpcn-mcp" and .name == "weather")'
----

...

[,bash]
----
rpk topic consume redpanda.otel_traces --use-schema-registry=value --offset start \
  | jq '.value | select(.status.code == 2)'
----

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/pages/mcp/remote/monitor-mcp-servers.adoc` around lines 71 - 96, Add the missing --use-schema-registry=value flag to both rpk topic consume commands and normalize the jq selectors to read from the rpk output wrapper (.value); specifically, update the first rpk invocation referenced by the selector '.value | select(.instrumentationScope.name == "rpcn-mcp" and .name == "weather")' to include the flag, and update the error-filtering rpk invocation to use '.value | select(.status.code == 2)' instead of selecting at the top level so both commands correctly deserialize Protobuf via the schema registry and access message fields consistently.

modules/ai-agents/pages/ai-gateway/builders/connect-your-agent.adoc-311-328 (1)
311-328: ⚠️ Potential issue | 🟠 Major

`client` is undefined in `test_models()` — broken example code.

The `test_models()` function on line 320 calls `client.chat.completions.create(...)`, but `client` is never defined inside this function or at module level in this snippet. Users copying this example will receive a `NameError: name 'client' is not defined` at runtime.

🐛 Proposed fix

```diff
+import os
+from openai import OpenAI
+
+client = OpenAI(
+    base_url=os.getenv("REDPANDA_GATEWAY_URL"),
+    api_key=os.getenv("REDPANDA_API_KEY"),
+)
+
 def test_models():
     """Test multiple models through the gateway"""
     models = [
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/pages/ai-gateway/builders/connect-your-agent.adoc` around lines 311 - 328, The test_models function calls client.chat.completions.create but never defines client, causing a NameError; fix by either (A) changing test_models to accept a client parameter (e.g., def test_models(client):) and use that passed-in client for client.chat.completions.create, or (B) instantiate the gateway client inside test_models before the loop (create the same client object used elsewhere in examples) and then call client.chat.completions.create; update any callers/examples accordingly so test_models has a valid client instance.

.github/workflows/test-mcp-examples.yaml-53-62 (1)
53-62: ⚠️ Potential issue | 🟠 Major

Fix SC2144 and SC2211 shell script bugs in the cookbook test loop.

Two real actionlint/shellcheck errors that can cause tests to be silently skipped or misbehave:

- SC2144 (line 56): `-f` doesn't work reliably with globs — the recommended fix is to use a `for` loop to expand the glob and test each result individually. When the glob expands to zero matches (no cookbook with `test-*.sh`), bash leaves the unexpanded literal as the argument; `-f` on that literal string returns false, silently skipping the directory.
- SC2211 (line 59): `./test-*.sh` is a glob used as a command name. If multiple scripts match, the first becomes the command and the rest become arguments, producing incorrect behaviour.
- Bonus — bare `cd` without subshell: if `cd` fails mid-loop, subsequent iterations run in the wrong directory. A subshell eliminates the need for `cd -` entirely.

🐛 Proposed fix

```diff
-      - name: Run cookbook tests
-        run: |
-          for dir in modules/develop/examples/cookbooks/*/; do
-            if [[ -f "${dir}test-"*".sh" ]]; then
-              echo "Testing ${dir}..."
-              cd "${dir}"
-              ./test-*.sh
-              cd - > /dev/null
-            fi
-          done
+      - name: Run cookbook tests
+        run: |
+          for dir in modules/develop/examples/cookbooks/*/; do
+            for script in "${dir}"test-*.sh; do
+              if [[ -f "$script" ]]; then
+                echo "Testing ${dir}..."
+                (cd "${dir}" && ./"$(basename "$script")")
+              fi
+            done
+          done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test-mcp-examples.yaml around lines 53 - 62, The loop over modules/develop/examples/cookbooks/*/ should be changed to explicitly expand test-*.sh files rather than using -f on a glob and to execute each matching script directly; replace the single if [[ -f "${dir}test-"*".sh" ]] and the glob-as-command ./test-*.sh with a nested for that iterates over "${dir}"test-*.sh, checks each file with a plain -f (or -x) test, and runs each matched script as a single command, and run the per-directory execution inside a subshell (e.g., (cd "$dir" && ./the-script)) so a failing cd cannot leak to later iterations. This fixes SC2144 (don't test unexpanded glob) and SC2211 (don't use a glob as the command) while removing the need for cd -.

modules/ai-agents/partials/integrations/claude-code-user.adoc-152-167 (1)
152-167: ⚠️ Potential issue | 🟠 Major

Remove or clarify the claim about `${VAR}` interpolation in `~/.claude.json`'s `mcpServers` section.

Official Anthropic documentation explicitly supports `${VAR}` expansion only in `.mcp.json` files (project-scope), not in `~/.claude.json` (local-scope). The note at line 167 claims this feature is available in both, which is inaccurate. To avoid silent configuration failures, either:

- Move the server configuration to a project-scope `.mcp.json` file where `${VAR}` expansion is officially supported, or
- Remove the claim that `${VAR}` interpolation works in `~/.claude.json` and clarify the actual scope limitations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/partials/integrations/claude-code-user.adoc` around lines 152 - 167, The note claiming `${VAR}` interpolation in the `mcpServers` section of `~/.claude.json` is inaccurate; update the text around `mcpServers` to either remove the assertion or explicitly state that `${VAR}` expansion is only supported in project-scope `.mcp.json` files and not in `~/.claude.json`, and show the actionable guidance to move the server config (e.g., entries using `REDPANDA_GATEWAY_URL` and `REDPANDA_API_KEY`) into a `.mcp.json` if environment-variable interpolation is required.

modules/ai-agents/partials/integrations/claude-code-user.adoc-72-83 (1)
72-83: ⚠️ Potential issue | 🟠 Major

`--api-provider` and `--base-url` are not valid `claude config set` flags — use environment variables instead.

The correct way to route Claude Code through a custom gateway is to set `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN`. These can be configured as shell environment variables or persisted in `~/.claude/settings.json`:

🐛 Proposed fix

```diff
-To route Claude Code's LLM requests through the gateway instead of directly to Anthropic:
-
-[,bash]
------
-claude config set \
-  --api-provider redpanda \
-  --base-url <your-gateway-endpoint>
------
-
-This routes all Claude model requests through your gateway, giving you centralized observability and policy enforcement.
+To route Claude Code's LLM requests through the gateway instead of directly to Anthropic:
+
+[,bash]
+-----
+export ANTHROPIC_BASE_URL="<your-gateway-endpoint>"
+export ANTHROPIC_AUTH_TOKEN="<your-api-key>"
+-----
+
+Alternatively, add these to `~/.claude/settings.json`:
+
+[,json]
+-----
+{
+  "env": {
+    "ANTHROPIC_BASE_URL": "<your-gateway-endpoint>",
+    "ANTHROPIC_AUTH_TOKEN": "<your-api-key>"
+  }
+}
+-----
+
+This routes all Claude model requests through your gateway, giving you centralized observability and policy enforcement.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/partials/integrations/claude-code-user.adoc` around lines 72 - 83, The docs incorrectly show using flags with the CLI command "claude config set"; replace that guidance to instruct users to set the ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN environment variables (or store them in the Claude settings JSON) to route Claude Code through a custom gateway rather than using the non-existent --api-provider and --base-url flags; update the example block and surrounding text to reference ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN and mention the alternative of persisting those values in the Claude settings JSON.

modules/ai-agents/examples/pipelines/fraud-detection-routing.yaml-33-35 (1)
33-35: ⚠️ Potential issue | 🟠 Major

`root = this` in `result_map` likely overwrites original transaction fields; silent JSON-parse failure creates a false-negative path.

Two distinct issues here:

1. `root = this` clobbers the source message.

The branch processor maps the result "back into the source message" via `result_map`. The canonical pattern for adding a field from branch results is `root.some_field = this`, not `root = this`. Examples in the docs consistently use `root.cardinality = this` so that original message fields are preserved. With `root = this`, the original `id`, `amount`, `merchant`, and `user_id` fields are replaced by the LLM agent's response object before `root.fraud_analysis` is even set. Drop the `root = this` line:

🐛 Proposed fix

```diff
       result_map: |
-        root = this
         root.fraud_analysis = content().parse_json().catch({})
```

2. Silent false-negative when JSON parsing fails.

LLMs frequently wrap JSON in markdown fences (a ```` ```json ... ``` ```` block). When that happens, `content().parse_json()` throws, `.catch({})` substitutes an empty object, `this.fraud_analysis.fraud_score` is `null`, and the switch cases both evaluate to `false` — routing every transaction to `transactions-cleared`. For a fraud detection pipeline this is a dangerous silent failure mode. Add explicit error handling:

🛡️ Suggested hardening

````diff
       result_map: |
         root.fraud_analysis = content().string().re_replace_all("```(?:json)?\\s*", "").parse_json().catch({})
+
+  - mapping: |
+      root = this
+      meta fraud_score = this.fraud_analysis.fraud_score.number().catch(-1)
````

Using `-1` as the catch value means a parse failure will never silently clear a transaction — it routes to `transactions-cleared` only when the score genuinely indicates no fraud, rather than when the LLM response was unparseable.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/examples/pipelines/fraud-detection-routing.yaml` around lines 33-35, the `result_map` currently uses `root = this`, which overwrites the original transaction fields; remove the `root = this` assignment and instead only assign the branch output into a nested field (e.g., set `root.fraud_analysis` from the parsed content), and harden JSON parsing by stripping code fences before parsing and ensuring parse failures produce a sentinel value (e.g., `fraud_score.number().catch(-1)`) so that parse errors yield -1 rather than silently treating the transaction as cleared; update the mapping references (`result_map`, `content().parse_json()`, `root.fraud_analysis`, `fraud_score.number().catch(-1)`) accordingly.

modules/ai-agents/examples/agents/compliance-agent-prompt.txt (1)
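The suggested hardening above can be sketched outside Bloblang as well. This Python sketch is illustrative only (the function and field names are assumptions, not part of the pipeline); it shows why fence-stripping plus a `-1` sentinel closes the false-negative path:

```python
import json
import re

# Strip markdown code fences (with an optional "json" language tag) before parsing.
FENCE = re.compile(r"`{3}(?:json)?\s*")

def parse_fraud_score(llm_output: str) -> float:
    """Return the fraud score, or a -1.0 sentinel when the LLM response
    cannot be parsed, so failures are never treated as cleared."""
    cleaned = FENCE.sub("", llm_output)
    try:
        return float(json.loads(cleaned)["fraud_score"])
    except (ValueError, KeyError, TypeError):
        return -1.0

TICKS = "`" * 3  # a literal ``` marker, built up to keep this example readable
fenced = f'{TICKS}json\n{{"fraud_score": 0.92}}\n{TICKS}'
print(parse_fraud_score(fenced))                     # 0.92
print(parse_fraud_score("Sorry, I cannot comply."))  # -1.0
```

A downstream switch can then route only scores in `[0, threshold)` to the cleared topic, so an unparseable response never clears a transaction.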
68-68: ⚠️ Potential issue | 🟠 Major

Factual error: fraud dispute acknowledgment is 24 hours, not 24 days.

The "24-30 days" range reads as a day range. However, `check_regulatory_requirements.yaml` stores the fraud branch timeline as `"acknowledge_dispute_hours": 24` — a 24-hour deadline, not 24 days. The billing-error branch uses `acknowledge_dispute_days: 30`, which is where the "30" comes from. As written, an LLM following this prompt would tell users that fraud disputes can be acknowledged in up to 24-30 days, which contradicts both the tool data and Regulation E expectations.

✏️ Suggested fix

```diff
-  - Acknowledge dispute: 24-30 days (varies by type)
+  - Acknowledge dispute: 24 hours (fraud) or 30 days (billing error)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/examples/agents/compliance-agent-prompt.txt` at line 68, the prompt line "Acknowledge dispute: 24-30 days (varies by type)" is factually wrong for fraud disputes; update the prompt in compliance-agent-prompt.txt to distinguish the two branches: state fraud disputes must be acknowledged within 24 hours and billing errors within 30 days, matching check_regulatory_requirements.yaml's keys `acknowledge_dispute_hours` (24) and `acknowledge_dispute_days` (30); ensure the wording explicitly references "fraud: 24 hours" and "billing errors: 30 days" so an LLM will not conflate the timelines.

modules/ai-agents/pages/agents/tutorials/transaction-dispute-resolution.adoc (1)
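The hours-versus-days distinction above can be made mechanical. A minimal Python sketch, assuming only the two YAML keys the review quotes (the lookup helper itself is hypothetical):

```python
# Mirrors the keys quoted from check_regulatory_requirements.yaml:
# fraud uses hours, billing errors use days -- two units, not one range.
REGULATORY_TIMELINES = {
    "fraud": {"acknowledge_dispute_hours": 24},
    "billing_error": {"acknowledge_dispute_days": 30},
}

def acknowledgment_deadline(dispute_type: str) -> str:
    """Render the acknowledgment deadline with its correct unit."""
    timeline = REGULATORY_TIMELINES[dispute_type]
    if "acknowledge_dispute_hours" in timeline:
        return f'{timeline["acknowledge_dispute_hours"]} hours'
    return f'{timeline["acknowledge_dispute_days"]} days'

print(acknowledgment_deadline("fraud"))          # 24 hours
print(acknowledgment_deadline("billing_error"))  # 30 days
```

Keeping the unit attached to the value, as here, is what prevents the "24-30 days" conflation in the first place.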
461-479: ⚠️ Potential issue | 🟠 Major

Tutorial grants overly broad cluster-level ACLs without a least-privilege caveat.

Line 463 instructs users to click the Clusters tab and select Allow all, and line 477 does the same for Consumer groups. For a tutorial that readers will likely follow literally, granting blanket cluster-level and consumer group-level permissions without noting that this is for tutorial convenience (and should be scoped down in production) sets a poor security example.

Add a note clarifying that Allow all is used here for simplicity and that production deployments should scope ACLs to only the required operations.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/pages/agents/tutorials/transaction-dispute-resolution.adoc` around lines 461-479, the tutorial currently instructs selecting "Allow all" on the Clusters tab and Consumer groups tab, which is overly broad; update the text around the "Clusters" and "Consumer groups" instructions (and the ACL example for principal `dispute-pipeline-user`) to add a concise caution: state that "Allow all" is used here for tutorial simplicity and must NOT be used in production, and advise scoping ACLs to the minimal required hosts, resource types/selectors (e.g., topic names starting with `bank.`) and operations for production deployments, or provide links to docs on least-privilege ACLs.

modules/ai-agents/partials/ai-hub/configure-ai-hub.adoc (1)
11-12: ⚠️ Potential issue | 🟠 Major

Google Gemini is claimed as a supported provider but is entirely absent from the backend pools, routing rules, and credential management sections.

The introduction and learning objectives (lines 7, 11, 48) state "OpenAI, Anthropic, and Google Gemini" as supported providers, but:

- The 6 documented backend pools cover only OpenAI (2 pools) and Anthropic (4 pools) — no Gemini pool exists.
- The 17 routing rules section describes only `openai/*` and `anthropic/*` prefixes.
- There is no "Add Google Gemini credentials" section under Manage provider credentials.
- Cost tracking lists only OpenAI and Anthropic.

Either remove Google Gemini from the overview claims, or add the corresponding backend pool documentation, routing rules (for example, a `google/*` prefix), and credential management steps.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/partials/ai-hub/configure-ai-hub.adoc` around lines 11-12, the doc claims "OpenAI, Anthropic, and Google Gemini" but Gemini is missing from backend pools, routing rules, credential management, and cost tracking; either remove Gemini from the overview or add corresponding documentation: add a Gemini backend pool entry under "backend pools" (e.g., a named Gemini pool), include routing rules that match the Gemini prefix (recommend using `google/*` or `gemini/*` alongside the existing `openai/*` and `anthropic/*` examples), add an "Add Google Gemini credentials" subsection under "Manage provider credentials" showing how to store provider keys, and update the Cost tracking section to include Gemini; update mentions in the introduction and learning objectives to match whichever option you choose so the three lists are consistent with the rest of the doc.

modules/ai-agents/pages/ai-gateway/mcp-aggregation-guide.adoc (1)
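To illustrate the gap, here is a toy prefix router in Python; the pool names and the `google/` prefix are assumptions mirroring the review's suggestion, not the gateway's actual configuration schema:

```python
# Illustrative routing table: first matching prefix wins.
ROUTING_RULES = {
    "openai/": "openai-pool",
    "anthropic/": "anthropic-pool",
    "google/": "gemini-pool",  # the rule the doc would need for Gemini
}

def route(model: str) -> str:
    """Return the backend pool whose prefix matches the model name."""
    for prefix, pool in ROUTING_RULES.items():
        if model.startswith(prefix):
            return pool
    raise ValueError(f"no routing rule matches {model!r}")

print(route("google/gemini-1.5-pro"))  # gemini-pool
print(route("openai/gpt-4o"))          # openai-pool
```

Without the `google/` entry, every Gemini-prefixed model name falls through to the error path, which is exactly the inconsistency the finding describes.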
569-569: ⚠️ Potential issue | 🟠 Major

Multiple PLACEHOLDER comments in configuration examples.

Lines 569, 631, and 991 contain `# PLACEHOLDER: ...` comments inside code blocks that will be rendered to users. These should either be replaced with actual configuration formats or removed before publishing.

Also applies to: 631-631, 991-991

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/pages/ai-gateway/mcp-aggregation-guide.adoc` at line 569, replace or remove the inline placeholder comments that will render to users (e.g. the strings "# PLACEHOLDER: Actual configuration format" and other "# PLACEHOLDER: ..." occurrences) in the mcp-aggregation-guide.adoc code blocks; either substitute each placeholder with the actual configuration snippet/example expected in that block or delete the placeholder lines so no "# PLACEHOLDER" text appears in the published code examples, and ensure the surrounding code block remains syntactically valid for AsciiDoc rendering.

modules/ai-agents/pages/ai-gateway/mcp-aggregation-guide.adoc (1)
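A pre-publish check for leftover markers like these can be automated. A minimal Python sketch (the directory layout below is a throwaway stand-in, not the repo's actual build tooling):

```python
import pathlib
import tempfile

def find_placeholders(root: pathlib.Path) -> list[str]:
    """Return file:line references for any surviving PLACEHOLDER markers."""
    hits = []
    for path in sorted(root.rglob("*.adoc")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if "PLACEHOLDER" in line:
                hits.append(f"{path.name}:{lineno}")
    return hits

# Demo against a temporary tree, not the real docs checkout.
with tempfile.TemporaryDirectory() as tmp:
    doc = pathlib.Path(tmp) / "guide.adoc"
    doc.write_text("= Title\n// PLACEHOLDER: Confirm orchestrator API details\n")
    print(find_placeholders(pathlib.Path(tmp)))  # ['guide.adoc:2']
```

Wiring a check like this into CI would fail the build whenever a `PLACEHOLDER` line survives into a publishable page.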
438-438: ⚠️ Potential issue | 🟠 Major

Unresolved PLACEHOLDER comment leaves unconfirmed content in the page.

Line 438 contains `// PLACEHOLDER: Confirm orchestrator API details`, which is rendered as an AsciiDoc comment and won't appear in HTML output, but it signals that the content below (tool name, input schema, security, limitations) is unconfirmed. If this information is speculative, it risks misleading users.

Would you like me to open an issue to track confirmation of the orchestrator API details before this page goes live?

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/pages/ai-gateway/mcp-aggregation-guide.adoc` at line 438, replace the unresolved "// PLACEHOLDER: Confirm orchestrator API details" marker in mcp-aggregation-guide.adoc with either the confirmed orchestrator API details (tool name, input schema, security, limitations) or a clear TODO that references an issue number tracking the confirmation; update the section that follows the placeholder to include the verified API fields under the appropriate headings and, if you can't confirm now, create the issue and insert the issue link next to the TODO so readers and reviewers know this is intentionally outstanding rather than speculative.

modules/ai-agents/partials/ai-hub/gateway-modes.adoc (1)
66-70: ⚠️ Potential issue | 🟠 Major

Contradictory information about Google Gemini support.

Lines 66 and 70 list Google Gemini as a supported provider in AI Hub mode, but the "Supported providers" section (lines 127–134) explicitly states only OpenAI and Anthropic are supported and that "Google AI…[is] not yet supported in AI Hub mode." One of these sections is incorrect and should be updated for consistency.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@modules/ai-agents/partials/ai-hub/gateway-modes.adoc` around lines 66-70, the "AI Hub mode" description and the "Supported providers" section conflict about Google Gemini support; update the content so both are consistent by either removing Google Gemini from the introductory copy under the "What it is" paragraph or by adding Google Gemini to the "Supported providers" list with appropriate caveats; specifically edit the "AI Hub mode" description text that currently names OpenAI, Anthropic, and Google Gemini and/or the "Supported providers" section so both reference the same supported providers (OpenAI and Anthropic) or include Google Gemini support details and status, ensuring the two sections (the "What it is" paragraph and the "Supported providers" section) match.
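The mismatch this finding describes reduces to a set difference. A tiny Python illustration (the provider lists are copied from the review comment, not parsed from the page):

```python
# Providers named in the introductory copy vs. the "Supported providers" section.
intro_providers = {"OpenAI", "Anthropic", "Google Gemini"}
supported_section = {"OpenAI", "Anthropic"}

# Symmetric difference surfaces anything one section claims and the other omits.
mismatch = intro_providers ^ supported_section
print(sorted(mismatch))  # ['Google Gemini']
```

A docs lint step that compares the two lists this way would catch the contradiction automatically instead of relying on review.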
modules/ai-agents/examples/mcp-tools/inputs/stream_processing.yaml
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…yaml Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Replace non-existent claude config set flags with ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN environment variables. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
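The precedence implied by this fix can be sketched in Python; the settings payload and merge helper below are illustrative (Claude Code's actual loader is not shown, and the endpoint and token values are stand-ins):

```python
import json
import os

# Illustrative settings payload, shaped like the "env" block the docs now recommend.
settings = json.loads("""
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://gateway.example.com",
    "ANTHROPIC_AUTH_TOKEN": "example-token"
  }
}
""")

# Settings-file values layered over the shell environment.
env = {**os.environ, **settings.get("env", {})}
print(env["ANTHROPIC_BASE_URL"])  # https://gateway.example.com
```

Either source (shell export or the settings-file `env` block) ends up as the same environment variables, which is why the docs can offer both options interchangeably.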
Description
This PR serves as the EPIC PR for ADP in package 1.
It merges the following PRs:
update cloud overview adp focus -
Transcripts - REPLACED WITH PR #498
AI agents
AI Gateway
AuthZ
Page previews
Global preview - https://deploy-preview-494--rp-cloud.netlify.app/
Checks